
    CARPE-ID: Continuously Adaptable Re-identification for Personalized Robot Assistance

    In today's Human-Robot Interaction (HRI) scenarios, it is commonly assumed that the robot cooperates with the closest individual or that the scene involves only a single human actor. However, in realistic settings such as shop-floor operations, this assumption may not hold, and the robot must recognize a specific target person in crowded environments. To fulfil this requirement, we propose a person re-identification module based on continual visual adaptation techniques that ensures the robot's seamless cooperation with the correct individual even under varying visual appearance and partial or complete occlusions. We test the framework both in isolation, on videos recorded in a laboratory environment, and in an HRI scenario, i.e., a person-following task performed by a mobile robot. The targets are asked to change their appearance during tracking and to disappear from the camera field of view to test the challenging cases of occlusion and outfit variation. We compare our framework with a state-of-the-art Multi-Object Tracking (MOT) method: CARPE-ID accurately tracks each selected target throughout the experiments in all but two limit cases, whereas the state-of-the-art MOT baseline averages four tracking errors per video.
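    A minimal Python sketch of the gallery-based matching-with-adaptation idea described above; the embedding dimension, threshold, and all class and function names are illustrative assumptions, not the CARPE-ID implementation (which obtains embeddings from a trained re-identification network).

```python
# Sketch: match detections against a gallery of target appearances and
# continually absorb confirmed matches, so appearance changes are tracked.
import numpy as np

class ContinualReID:
    def __init__(self, init_embedding, match_thresh=0.6, max_gallery=50):
        self.gallery = [init_embedding / np.linalg.norm(init_embedding)]
        self.match_thresh = match_thresh
        self.max_gallery = max_gallery

    def _cosine(self, a, b):
        return float(a @ b)  # embeddings are kept L2-normalized

    def match(self, candidate_embeddings):
        """Return the index of the target among candidates, or None if occluded/absent."""
        best_idx, best_score = None, -1.0
        for i, emb in enumerate(candidate_embeddings):
            emb = emb / np.linalg.norm(emb)
            score = max(self._cosine(emb, g) for g in self.gallery)
            if score > best_score:
                best_idx, best_score = i, score
        if best_score < self.match_thresh:
            return None  # no confident match: target occluded or out of view
        # Continual adaptation: absorb the new appearance into the gallery
        new = candidate_embeddings[best_idx]
        self.gallery.append(new / np.linalg.norm(new))
        if len(self.gallery) > self.max_gallery:
            self.gallery.pop(0)  # drop the oldest appearance sample
        return best_idx

# Toy usage with random vectors standing in for network embeddings
rng = np.random.default_rng(0)
reid = ContinualReID(rng.normal(size=128))
print(reid.match([rng.normal(size=128) for _ in range(3)]))
```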

    Omnidirectional Walking Pattern Generator Combining Virtual Constraints and Preview Control for Humanoid Robots

    This paper presents a novel omnidirectional walking pattern generator for bipedal locomotion that combines two structurally different approaches, based on virtual constraints and preview control theory, to generate a flexible gait that can be modified online. The proposed strategy synchronizes the displacement of the robot along the two planes of walking: zero-moment-point (ZMP) based preview control is responsible for the lateral component of the gait, while the sagittal motion is generated by a more dynamical approach based on virtual constraints. The resulting algorithm is characterized by low computational complexity and high flexibility, prerequisites for successful deployment on humanoid robots operating in real-world scenarios. This split is motivated by observations in biomechanics showing that, during nominal gait, the dynamic motion of human walking is mainly generated in the sagittal plane. We describe the implementation of the algorithm and detail the strategy chosen to enable omnidirectionality and online gait tuning. We validate our strategy through simulation experiments on the COMAN+ platform, an adult-size humanoid robot developed at Istituto Italiano di Tecnologia. Finally, the hybrid walking pattern generator is implemented on real hardware, demonstrating promising results: the WPG trajectories result in open-loop stable walking in the absence of external disturbances.
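    For context on the lateral component, ZMP-based preview control is commonly formulated on the cart-table model; the standard formulation below (Kajita et al., 2003) is general background, not an excerpt from this paper.

```latex
% Cart-table model: the ZMP p follows from the CoM position x at constant height z_c
p = x - \frac{z_c}{g}\,\ddot{x}
% Discrete-time preview control law on the CoM jerk u, with LQ gains G_I, G_x, G_p(j),
% preview horizon N_p, and state \mathbf{x}(k) = (x(k), \dot{x}(k), \ddot{x}(k))^\top:
u(k) = -G_I \sum_{i=0}^{k} \bigl(p(i) - p^{\mathrm{ref}}(i)\bigr)
       - \mathbf{G}_x\,\mathbf{x}(k)
       - \sum_{j=1}^{N_p} G_p(j)\, p^{\mathrm{ref}}(k+j)
```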

    Multi-contact planning and control for humanoid robots: Design and validation of a complete framework

    In this paper, we consider the problem of generating appropriate motions for a torque-controlled humanoid robot that is assigned a multi-contact loco-manipulation task, i.e., a task that requires the robot to move within the environment by repeatedly establishing and breaking multiple, non-coplanar contacts. To this end, we present a complete multi-contact planning and control framework for multi-limbed robotic systems, such as humanoids. The planning layer works offline and consists of two sequential modules: first, a stance planner computes a sequence of feasible contact combinations; then, a whole-body planner finds the sequence of collision-free humanoid motions that realize them while respecting the physical limitations of the robot. For the challenging problem posed by the first stage, we propose a novel randomized approach that requires neither pre-designed potential contacts nor any kind of pre-computation. The control layer produces online torque commands that enable the humanoid to execute the planned motions while guaranteeing closed-loop balance. It relies on two modules, the stance switching module and the reactive balancing module, whose combined action allows it to withstand execution inaccuracies, external disturbances, and modeling uncertainties. Numerical and experimental results obtained on COMAN+, a torque-controlled humanoid robot designed at Istituto Italiano di Tecnologia, validate our framework on loco-manipulation tasks of different complexity.
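    To make the randomized stance planning idea concrete, here is an illustrative Python sketch: it grows a tree of stances, each edge relocating one limb to a sampled contact, keeping only stances that pass a feasibility test. The data structures and the feasibility callback are simplified assumptions, not the paper's implementation.

```python
# Sketch of sampling-based stance planning: no pre-designed contacts,
# no pre-computation; candidate contacts are sampled on the fly.
import random

def plan_stances(start_stance, goal_test, sample_contact, is_feasible, max_iters=10000):
    """Grow a tree of stances; each edge moves one limb to a sampled contact.

    start_stance:   dict limb -> contact pose
    goal_test:      stance -> bool
    sample_contact: limb -> candidate contact pose on the environment
    is_feasible:    stance -> bool (e.g. static balance + reachability check)
    """
    tree = {0: (start_stance, None)}  # node id -> (stance, parent id)
    for _ in range(max_iters):
        node_id = random.choice(list(tree))
        stance, _ = tree[node_id]
        limb = random.choice(list(stance))       # limb whose contact changes
        new_stance = dict(stance)
        new_stance[limb] = sample_contact(limb)  # break & re-establish contact
        if not is_feasible(new_stance):
            continue
        new_id = len(tree)
        tree[new_id] = (new_stance, node_id)
        if goal_test(new_stance):
            # Reconstruct the stance sequence by walking back to the root
            seq = []
            while new_id is not None:
                s, new_id = tree[new_id]
                seq.append(s)
            return list(reversed(seq))
    return None  # no feasible stance sequence found within the budget
```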

    XBotCore: A Real-Time Cross-Robot Software Platform

    Muratore L, Laurenzi A, Hoffman EM, Rocchi A, Caldwell DG, Tsagarakis NG. XBotCore: A Real-Time Cross-Robot Software Platform. In: IEEE International Conference on Robotic Computing (IRC), 2017.

    WoLF: the Whole-body Locomotion Framework for Quadruped Robots

    The Whole-Body Locomotion Framework (WoLF) is an end-to-end software suite devoted to the loco-manipulation of quadruped robots. WoLF abstracts the complexity of planning and controlling quadrupedal robot hardware into a simple-to-use and robust software layer that can be connected through multiple tele-operation devices to different quadruped robot models. Furthermore, WoLF allows mounted devices, such as arms or pan-tilt cameras, to be controlled jointly with the quadrupedal platform. In this short paper, we introduce the main features of WoLF and its overall software architecture.
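    The device abstraction suggested above can be pictured as follows; this is a hypothetical Python sketch of such a layer, with all class and function names invented for illustration rather than taken from WoLF.

```python
# Sketch: any tele-operation device reduces to the same velocity command,
# so the locomotion controller is independent of both device and robot model.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Twist:
    vx: float = 0.0   # forward velocity [m/s]
    vy: float = 0.0   # lateral velocity [m/s]
    wz: float = 0.0   # yaw rate [rad/s]

class TeleopDevice(ABC):
    """Common interface for joypads, keyboards, and other input devices."""
    @abstractmethod
    def read(self) -> Twist: ...

class Joypad(TeleopDevice):
    def __init__(self, axes):
        self.axes = axes  # callable returning raw stick values (lx, ly, rx)
    def read(self) -> Twist:
        lx, ly, rx = self.axes()
        return Twist(vx=ly, vy=-lx, wz=-rx)

def control_tick(device: TeleopDevice, send_to_controller):
    """One loop tick: the same code path serves any device and any quadruped."""
    send_to_controller(device.read())
```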

    On the Implementation of the Inverse Kinematics Solver Used in the WALK-MAN Humanoid Robot

    Hoffman EM, Rocchi A, Tsagarakis NG. On the Implementation of the Inverse Kinematics Solver Used in the WALK-MAN Humanoid Robot. In: The 34th Annual Conference of the Robotics Society of Japan (RSJ), 2016.

    CartesI/O: A ROS Based Real-Time Capable Cartesian Control Framework

    This work introduces a framework for the Cartesian control of multi-legged, highly redundant robots. The proposed framework allows an untrained user to perform complex motion tasks with robotic platforms by leveraging a simple, auto-generated ROS-based interface. Contrary to other motion control frameworks (e.g., ROS MoveIt!), we focus on the execution of Cartesian trajectories that are specified online, rather than planned in advance, as is the case, for instance, in tele-operation and locomotion tasks. Moreover, we address the problem of generating such motions within a hard real-time (RT) control loop. Finally, we demonstrate the capabilities of our framework both on the COMAN+ humanoid robot and on the hybrid wheeled-legged quadruped CENTAURO.
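    As a concept illustration of the kind of computation such an online Cartesian controller runs at every tick, here is a damped least-squares differential IK step in Python. This is a generic sketch under simplified assumptions, not CartesI/O's actual API or solver.

```python
# Sketch: map a Cartesian velocity reference, streamed online (e.g. from
# tele-operation), to a joint-space update inside a fixed-rate control loop.
import numpy as np

def dls_ik_step(J, v_ref, q, dt, damping=1e-2):
    """One control tick of damped least-squares differential inverse kinematics.

    J:     (m x n) task Jacobian at the current configuration
    v_ref: (m,) desired Cartesian velocity
    q:     (n,) current joint positions
    """
    # qdot = J^T (J J^T + lambda^2 I)^-1 v_ref  (damping handles singularities)
    JJt = J @ J.T
    qdot = J.T @ np.linalg.solve(JJt + damping**2 * np.eye(J.shape[0]), v_ref)
    return q + qdot * dt

# Toy usage with a random Jacobian standing in for a real robot model
rng = np.random.default_rng(1)
J = rng.normal(size=(6, 10))   # 6-DoF task, 10-DoF redundant chain
q = np.zeros(10)
q = dls_ik_step(J, np.array([0.1, 0, 0, 0, 0, 0]), q, dt=0.001)
```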